55 research outputs found

    Toward an affect-sensitive multimodal human-computer interaction

    This paper argues that next-generation human-computer interaction (HCI) designs need to include the essence of emotional intelligence -- the ability to recognize a user's affective states -- in order to become more human-like, more effective, and more efficient. Affective arousal modulates all nonverbal communicative cues (facial expressions, body movements, and vocal and physiological reactions). In face-to-face interaction, humans detect and interpret these interactive signals from their communicator with little or no effort. Yet designing and developing an automated system that accomplishes these tasks is rather difficult. This paper surveys past work on solving these problems by computer and provides a set of recommendations for developing the first part of an intelligent multimodal HCI -- an automatic, personalized analyzer of a user's nonverbal affective feedback.

    Challenge Patient Dispatching in Mass Casualty Incidents

    Efficient management of mass casualty incidents is complex, since regular emergency medical services structures have to be switched to a temporary “disaster mode” involving additional operational and tactical structures. Most of the relevant decisions have to be taken on-site in a provisional and chaotic environment. Gathering data about affected persons is one side of the coin; the other is on-site patient dispatching, which requires information exchange with the regular emergency call center and destination hospitals. In this paper we extend a previous conference contribution on the research project e-Triage to the aspects of patient data and on-site patient dispatching. Our considerations reflect the situation in Germany, which, from our point of view, deserves substantial harmonization.

    Facial Action Recognition for Facial Expression Analysis from Static Face Images

    Automatic recognition of facial gestures (i.e., facial muscle activity) is rapidly becoming an area of intense interest in the research field of machine vision. In this paper, we present an automated system that we developed to recognize facial gestures in static, frontal- and/or profile-view color face images. A multidetector approach to facial feature localization is utilized to spatially sample the profile contour and the contours of the facial components such as the eyes and the mouth. From the extracted contours of the facial features, we extract 10 profile-contour fiducial points and 19 fiducial points on the contours of the facial components. Based on these, 32 individual facial muscle actions (AUs), occurring alone or in combination, are recognized using rule-based reasoning. With each scored AU, the algorithm associates a factor denoting the certainty with which that AU has been scored. A recognition rate of 86% is achieved.
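    The rule-based AU scoring described above, where each scored AU carries a certainty factor, can be sketched roughly as follows. The feature names, thresholds, and rules here are illustrative assumptions, not the paper's actual rule set.

```python
# Hedged sketch of rule-based AU scoring from fiducial-point measurements.
# Feature names, thresholds, and rules are hypothetical, not the paper's.

def score_aus(features):
    """features: normalized [0, 1] measurements derived from fiducial points."""
    scored = {}
    # Hypothetical rule: a large brow-to-eye distance suggests AU1 (inner brow raiser).
    if features.get("brow_eye_dist", 0.0) > 0.6:
        scored["AU1"] = min(1.0, features["brow_eye_dist"])  # certainty factor
    # Hypothetical rule: raised mouth corners suggest AU12 (lip corner puller).
    if features.get("mouth_corner_raise", 0.0) > 0.5:
        scored["AU12"] = features["mouth_corner_raise"]
    return scored

# Each scored AU maps to the certainty with which it was scored.
result = score_aus({"brow_eye_dist": 0.8, "mouth_corner_raise": 0.7})
```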

    GABEK WinRelan® – a Qualitative Method for Crisis Research Engaging Crisis Management Personnel

    Qualitative research methods like GABEK WinRelan are advantageous tools for analyzing, and thereby improving, crisis management planning and communication systems by interrogating crisis management personnel. In contrast to quantitative methods, they help to identify, explore, and structure new, important aspects in this field and to formulate more specific research questions. This paper describes the usage and advantages of the qualitative method GABEK WinRelan within crisis management research, particularly within the e-Triage project, which aims at the development of an electronic registration system for affected persons in mass casualty incidents. Furthermore, it addresses related research fields, such as stress during emergency missions, and the role GABEK WinRelan could play in examining them.

    Cystatin C, a marker for successful aging and glomerular filtration rate, is not influenced by inflammation

    Background. The plasma level of cystatin C is a better marker than plasma creatinine for successful aging. It has been assumed that the advantage of cystatin C is due not only to its being a better marker for glomerular filtration rate (GFR) than creatinine, but also to an inflammatory state of the patient inducing a raised cystatin C level. However, the observations of an association between cystatin C level and inflammation stem from large cohort studies. The present work concerns the cystatin C levels and degree of inflammation in longitudinal studies of individual subjects without inflammation who undergo elective surgery. Methods. Cystatin C, creatinine, and the inflammatory markers CRP, serum amyloid A (SAA), haptoglobin, and orosomucoid were measured in plasma samples from 35 patients the day before elective surgery and subsequently during seven consecutive days. Results. Twenty patients had CRP levels below 1 mg/L before surgery and low levels of the additional inflammatory markers. Surgery caused marked inflammation, with high peak values of CRP and SAA on the second day after the operation. The cystatin C level did not change significantly during the observation period and did not correlate significantly with the level of any of the four inflammatory markers. The creatinine level was significantly reduced on the first postoperative day but reached the preoperative level towards the end of the observation period. Conclusion. The inflammatory status of a patient influences neither the role of cystatin C as a marker of successful aging nor its role as a marker of GFR.

    Mixed Fuzzy-system and Artificial Neural Network Approach to the Automated Recognition of Mouth Expressions.

    One of the most important parts of automatic facial expression recognition is recognizing the mouth expression. This paper describes a new approach to this task. In contrast to work already done in this field, it is knowledge-based rather than graphics-based. We use a fuzzy system in combination with an artificial neural network to recognize the mouth shape. 1 Introduction. At this moment there is an ongoing project in the Knowledge Based Systems group at the Delft University of Technology, which is aimed at the development of the Integrated System for Facial Expression Recognition (ISFER). The multimedia workbench [1] is currently the basis for the development of the system. Some modules of the ISFER system have already been developed; for example, the interpretation part of the system was presented in [2]. Such systems for facial expression recognition are being developed not only at TU Delft but also, for example, at the MIT Media Laboratory [3]. One of the part…
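    A minimal sketch of how a fuzzy front end could feed a small neural stage for mouth-shape classification, in the spirit of the combined approach described above. The features, membership functions, classes, and weights are illustrative assumptions, not ISFER's actual design.

```python
# Hedged sketch: fuzzify hypothetical mouth measurements, then combine the
# memberships with hand-set weights (one "neuron" per class). Illustrative only.

def tri(x, a, b, c):
    """Triangular fuzzy membership over [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_mouth(width, openness):
    # Fuzzify two hypothetical normalized mouth-shape measurements.
    wide = tri(width, 0.4, 0.8, 1.2)
    open_ = tri(openness, 0.3, 0.7, 1.1)
    # One weighted unit per class; the weights are made up for illustration.
    smile = 1.2 * wide - 0.4 * open_
    surprise = 1.1 * open_ - 0.2 * wide
    return "smile" if smile > surprise else "surprise"
```

    A wide, mostly closed mouth would score as "smile" under these made-up weights, while a narrow, open mouth would score as "surprise".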
